4 research outputs found

    Online representation learning with single and multi-layer Hebbian networks for image classification

    Unsupervised learning permits the development of algorithms that can adapt to a variety of datasets using the same underlying rules, thanks to the autonomous discovery of discriminating features during training. Recently, a new class of Hebbian-like, local unsupervised learning rules for neural networks has been developed that minimises a similarity-matching cost-function; these rules have been shown to perform sparse representation learning. This study tests the effectiveness of one such learning rule for learning features from images. The rule implemented is derived from a nonnegative classical multidimensional scaling cost-function and is applied to both single- and multi-layer architectures. The features learned by the algorithm are then used as input to an SVM to test their effectiveness for classification on the established CIFAR-10 image dataset. The algorithm performs well in comparison to other unsupervised learning algorithms and multi-layer networks, suggesting its suitability for the design of a new class of compact, online learning networks.
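    As a rough illustration of the kind of rule involved, the sketch below shows one common form of an online nonnegative similarity matching (NSM) network from the literature (after Pehlevan and Chklovskii): a rectified fixed-point step for the outputs followed by Hebbian feedforward and anti-Hebbian lateral updates. This is a minimal sketch; the exact normalisation and learning-rate schedule used in the paper may differ.

    import numpy as np

    def nsm_step(x, W, M, eta=1e-3, n_iter=50):
        """One online NSM step: infer the output y, then update the weights.

        x : input vector (d,); W : feedforward weights (k, d);
        M : lateral inhibition (k, k), zero diagonal.
        """
        # Run the recurrent dynamics y = ReLU(W x - M y) to a fixed point.
        y = np.zeros(W.shape[0])
        for _ in range(n_iter):
            y = np.maximum(0.0, W @ x - M @ y)
        # Hebbian feedforward and anti-Hebbian lateral plasticity,
        # each with a local quadratic decay term.
        W += eta * (np.outer(y, x) - (y**2)[:, None] * W)
        M += eta * (np.outer(y, y) - (y**2)[:, None] * M)
        np.fill_diagonal(M, 0.0)  # neurons do not inhibit themselves
        return y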

    Neural networks for efficient nonlinear online clustering

    Unsupervised learning techniques, such as clustering and sparse coding, have been adapted for use with data sets exhibiting nonlinear relationships through the use of kernel machines. These techniques often require explicit computation of the kernel matrix, which becomes expensive as the number of inputs grows, making them unsuitable for efficient online learning. This paper proposes an algorithm and a neural architecture for online approximated nonlinear kernel clustering using any shift-invariant kernel. The novel model outperforms traditional clustering methods based on low-rank kernel approximation, and it requires significantly less memory than popular kernel k-means while showing competitive performance on large data sets.
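    The paper's exact construction is not reproduced here, but a standard way to approximate any shift-invariant kernel without forming the kernel matrix is random Fourier features (Rahimi and Recht), which can be paired with online k-means; the sketch below, with the RBF kernel as the example and all names chosen for illustration, shows the general shape of such a method.

    import numpy as np

    class OnlineRFFKMeans:
        """Online k-means in a random-Fourier-feature space (illustrative)."""

        def __init__(self, k, d, n_features=256, gamma=1.0, seed=0):
            rng = np.random.default_rng(seed)
            # Spectral samples for the RBF kernel exp(-gamma * ||x - z||^2).
            self.Omega = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
            self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            self.C = rng.normal(size=(k, n_features))  # centroids in feature space
            self.counts = np.zeros(k)

        def phi(self, x):
            # phi(x) . phi(z) approximates k(x, z) for the chosen kernel.
            return np.sqrt(2.0 / len(self.b)) * np.cos(x @ self.Omega + self.b)

        def partial_fit(self, x):
            z = self.phi(x)
            j = int(np.argmin(np.linalg.norm(self.C - z, axis=1)))
            self.counts[j] += 1
            self.C[j] += (z - self.C[j]) / self.counts[j]  # running-mean update
            return j  # cluster assignment for this sample

    Memory in this scheme scales with the number of clusters and random features rather than with the number of inputs, which is what makes the online setting feasible.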

    Building efficient deep Hebbian networks for image classification tasks

    Multi-layer models of sparse coding (deep dictionary learning) and dimensionality reduction (PCANet) have shown promise as unsupervised learning models for image classification tasks. However, pure implementations of these models have limited generalisation capabilities and high computational cost. This work introduces the Deep Hebbian Network (DHN), which combines the advantages of sparse coding, dimensionality reduction, and convolutional neural networks for learning features from images. Unlike other deep neural networks, in this model both the learning rules and the neural architectures are derived from cost-function minimisations. Moreover, the DHN model can be trained online thanks to its Hebbian components. Different configurations of the DHN have been tested on scene and image classification tasks. Experiments show that the DHN model can automatically discover highly discriminative features directly from image pixels without using any data augmentation or semi-labeling.
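    As a hypothetical sketch of such a pipeline (the stage choices and all names here are illustrative assumptions, not the paper's configuration), a DHN-style model stacks a linear subspace stage and a rectified sparse stage over image patches, pools the codes, and feeds a linear classifier; PCA stands in below for the Hebbian subspace layer, and a random rectified projection for the Hebbian-learned dictionary.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import LinearSVC

    def patches(img, size=6, stride=2):
        # Dense square patches from a 2-D grayscale image, flattened.
        H, W = img.shape
        return np.array([img[i:i + size, j:j + size].ravel()
                         for i in range(0, H - size + 1, stride)
                         for j in range(0, W - size + 1, stride)])

    def encode(img, pca, D):
        Z = pca.transform(patches(img))   # stage 1: subspace projection
        S = np.maximum(0.0, Z @ D.T)      # stage 2: rectified (sparse-like) code
        return S.max(axis=0)              # max-pool each feature over the image

    # Illustrative usage, assuming train_imgs / train_labels exist:
    # pca = PCA(n_components=24, whiten=True).fit(
    #     np.concatenate([patches(im) for im in train_imgs]))
    # D = np.random.randn(128, 24)        # stand-in for a Hebbian-learned dictionary
    # X = np.stack([encode(im, pca, D) for im in train_imgs])
    # clf = LinearSVC().fit(X, train_labels)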

    Exploration and extension of the similarity matching framework: feature learning, nonlinear methods and transformation learning

    Similarity matching (SM) is a recently introduced framework for deriving biologically plausible neural networks from objective functions. Three key biological properties associated with these networks are 1) Hebbian rules, 2) unsupervised learning, and 3) online implementations. In particular, previous work has demonstrated that unconstrained-in-sign SM (USM) and nonnegative SM (NSM) can lead to neural networks (NNs) performing linear principal subspace projection (PSP) and clustering. Starting from USM and NSM, the work undertaken in this thesis explores the capabilities and performance of SM and extends SM to novel sets of NNs and unsupervised learning tasks.

    The first objective of this work is to explore the capabilities of existing SM NNs for feature learning. Representations learned by different SM NNs are used as input to a linear classifier to measure their classification accuracy on established image datasets. The NN derived from NSM is employed to learn features from images with single- and dual-layer architectures. The simulations show that the features learned by NSM are comparable to Gabor filters and that a simple single-layer Hebbian network can outperform more complex models. The NN derived from USM is used to learn features in combination with block-wise histograms and binary hashing. The proposed set of architectures (USMNet), when evaluated in terms of accuracy, is competitive with unsupervised learning algorithms and multi-layer networks. Finally, Deep Hebbian Networks (DHNs) are proposed; a DHN combines NSM and USM stages within one architecture. DHNs are evaluated on image classification tasks, on which they outperform the aforementioned models.

    The second objective of this work is to extend SM beyond linear methods and static images. To incorporate nonlinearity, kernel-based versions of SM, K-USM and K-NSM, are proposed; these map onto NNs performing nonlinear online clustering and PSP, outperforming traditional methods. To incorporate temporal information, a new SM cost-function is applied to pairs of consecutive images to develop the TNSM algorithm. This is mapped onto an NN that performs motion detection and recapitulates several salient features of the fly visual system. The proposed approach is also applicable to the general problem of transformation learning.
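    For reference, the similarity matching objectives underlying these networks take the following general form, standard in the SM literature (the thesis's normalisation conventions may differ). With X holding T input vectors column-wise and Y the corresponding outputs:

    \min_{Y \in \mathbb{R}^{k \times T}} \frac{1}{T^2} \left\lVert X^{\top} X - Y^{\top} Y \right\rVert_F^2 \quad \text{(USM)},
    \qquad
    \min_{Y \geq 0} \frac{1}{T^2} \left\lVert X^{\top} X - Y^{\top} Y \right\rVert_F^2 \quad \text{(NSM)}.

    Matching the input similarities X^T X with the output similarities Y^T Y yields principal subspace projection in the unconstrained case, and clustering or sparse features once nonnegativity is imposed.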